This paper explores whether some language model representations may be inherently multi-dimensional, challenging the linear representation hypothesis. The authors develop a sparse-autoencoder-based method for finding multi-dimensional features in GPT-2 and Mistral 7B. They surface interpretable examples, such as circular features representing the days of the week and the months of the year, which the models use to solve tasks involving modular arithmetic.
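The intuition behind such circular features can be sketched with a toy example (this is an illustration only, not the paper's method or code): if the seven days of the week are embedded as points on a unit circle, then "adding k days" becomes a rotation by 2πk/7, so modular arithmetic turns into geometry.

```python
import math

# Toy illustration: represent the 7 days of the week as points on a unit
# circle, so that "adding k days" is a rotation by 2*pi*k/7.

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def embed(i: int) -> tuple[float, float]:
    """Map day index i to a point on the unit circle."""
    angle = 2 * math.pi * i / 7
    return (math.cos(angle), math.sin(angle))

def add_days(point: tuple[float, float], k: int) -> tuple[float, float]:
    """Rotate the circular representation by k day-steps."""
    theta = 2 * math.pi * k / 7
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def decode(point: tuple[float, float]) -> str:
    """Nearest-neighbor decode a point back to a day name."""
    dists = [math.dist(point, embed(i)) for i in range(7)]
    return DAYS[dists.index(min(dists))]

# "Friday plus 4 days": index 4 + 4 = 8, which wraps to 1 (Tuesday)
# purely through rotation -- no explicit mod operation needed.
print(decode(add_days(embed(4), 4)))  # prints "Tue"
```

The wrap-around at Sunday falls out of the geometry for free, which is the appeal of a circular representation over a one-dimensional linear one.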
Explore 14 top open-source Large Language Models (LLMs) available for research and commercial use. These open-source models offer transparency, freedom from vendor lock-in, and full control over customization. The article details each model's parameters, license, and usage.
An in-depth guide to Mistral 7B, a 7-billion-parameter language model released by Mistral AI. The guide introduces the model and covers its capabilities, code generation, limitations, and guardrails, including how to enforce them. It also surveys applications, papers, and additional reading on Mistral 7B and its fine-tuned variants.